# Training-Free Pruning

Mistral 7B Instruct V0.2 Sparsity 30 V0.1
Apache-2.0
This model is a 30%-sparse variant of Mistral-7B-Instruct-v0.2 (itself an improved instruction-tuned successor to Mistral-7B-Instruct-v0.1), produced with the Wanda pruning method. Wanda is training-free: it prunes the dense checkpoint without any retraining while maintaining competitive performance. A sketch of the Wanda scoring step is given below the model details.
Tags: Large Language Model, Transformers
Author: wang7776
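
The listing does not include code, so the following is a minimal sketch of how a Wanda-style score could be used to prune one linear layer to 30% unstructured sparsity. The function name `wanda_prune_layer`, the tensor shapes, and the use of a small set of calibration activations are illustrative assumptions, not the released model's actual pruning script; Wanda's core idea shown here is scoring each weight by its magnitude times the L2 norm of its input activations and zeroing the lowest-scoring weights per output row, with no gradient updates.

```python
import torch

def wanda_prune_layer(weight: torch.Tensor,
                      calib_inputs: torch.Tensor,
                      sparsity: float = 0.3) -> torch.Tensor:
    """Prune one linear layer's weight matrix with a Wanda-style score.

    weight:       (out_features, in_features) dense weight matrix
    calib_inputs: (num_tokens, in_features) activations gathered from a small
                  calibration set; no gradients are used, hence "training-free"
    sparsity:     fraction of weights to zero out (0.3 -> 30% sparsity)
    """
    # Per-input-channel L2 norm of the calibration activations
    act_norm = calib_inputs.norm(p=2, dim=0)             # (in_features,)

    # Wanda importance score: |W_ij| * ||X_j||_2
    score = weight.abs() * act_norm.unsqueeze(0)          # (out, in)

    # Zero out the lowest-scoring weights within each output row
    n_prune = int(weight.shape[1] * sparsity)
    mask = torch.ones_like(weight, dtype=torch.bool)
    _, prune_idx = torch.topk(score, n_prune, dim=1, largest=False)
    mask.scatter_(1, prune_idx, False)

    return weight * mask

# Hypothetical usage on a random layer and calibration batch:
w = torch.randn(4096, 4096)
x = torch.randn(512, 4096)
w_sparse = wanda_prune_layer(w, x, sparsity=0.3)
print(f"sparsity: {(w_sparse == 0).float().mean():.2%}")
```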